Halt AI research? Doctors, public health experts call unchecked AI 'existential threat to humanity'
Medical experts have issued a fresh call to halt the development of artificial intelligence (AI), warning that it poses an 'existential threat' to people. A team of five doctors and global health policy experts from four continents identified three ways in which the technology could wipe out humans. First is the risk that AI will amplify authoritarian tactics such as surveillance and disinformation. 'The ability of AI to rapidly clean, organise and analyse massive data sets consisting of personal data, including images collected by the increasingly ubiquitous presence of cameras,' they say, could make it easier for authoritarian or totalitarian regimes to come to power and stay in power. Second, the group warns that AI could accelerate mass killing through the expanded use of Lethal Autonomous Weapon Systems (LAWS).
Why Halt AI Research When We Already Know How To Make It Safer
Last week, the Future of Life Institute published an open letter proposing a six-month moratorium on the "dangerous" AI race. It has since been signed by over 3,000 people, including some influential members of the AI community. While it is good that the risks of AI systems are gaining visibility within the community and across society, both the issues described and the actions proposed in the letter are unrealistic and unnecessary. The call for a pause on AI work is not only vague but also infeasible. While the training of large language models by for-profit companies gets most of the attention, it is far from the only type of AI work taking place.